14 research outputs found

    Optimization for Networks and Object Recognition

    The present thesis explores two distinct application areas of combinatorial optimization: one related to data transfer in networks and the other to object recognition. Caching is an essential technique to improve throughput and latency in a wide variety of applications. The core idea is to duplicate content in memories distributed across the network, which can then be exploited to deliver requested content with less congestion and delay. In particular, it has been shown that the use of caching together with smart offloading strategies in a RAN composed of evolved NodeBs (eNBs), access points (APs, e.g., WiFi), and UEs can significantly reduce backhaul traffic and service latency. The traditional role of cache memories is to deliver the maximal amount of requested content locally rather than from a remote server. While this approach is optimal for single-cache systems, it has recently been shown to be, in general, significantly suboptimal for systems with multiple caches (i.e., cache networks), since it provides only an additive caching gain; cache memories should instead be used to enable a multiplicative caching gain. Recent studies have shown that storing different portions of the content across the wireless network caches and capitalizing on the spatial reuse of device-to-device (D2D) communications, or exploiting globally cached information in order to multicast coded messages simultaneously useful to a large number of users, enables a global caching gain. We focus on the case of a single server (e.g., a base station) and multiple users, each of which caches segments of files from a finite library. Each user requests one (whole) file in the library, and the server sends a common coded multicast message to satisfy all users at once. The problem consists of finding the smallest possible codeword length that satisfies such requests. To solve this problem we present two achievable caching and coded delivery schemes and one correlation-aware caching scheme, each based on a heuristic polynomial-time coloring algorithm.

    Automatic object recognition has become, over the last decades, a central topic in artificial intelligence research, with a significant burst over the last few years with the advent of the deep learning paradigm. In this context, the objective of the work discussed in the last two chapters of this thesis is to improve the performance of a natural-image classifier by introducing into the loop knowledge coming from the real world, expressed in terms of the probabilities of a set of spatial relations between the objects in the images. In other words, the framework presented in this work aims at integrating the output of standard classifiers on different image parts with domain knowledge encoded in a probabilistic ontology.
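    As an illustration of the coded multicast idea summarized above, the following minimal Python sketch reproduces the classic two-user, two-file example: each user caches a different half of every file, and a single XOR-coded transmission satisfies both demands at once. It is a toy illustration of the general principle, not one of the schemes developed in the thesis; the file names, segment sizes, and placement rule are assumptions made only for this example.

    # Toy coded-multicast example (illustrative only, not the thesis's schemes).

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        """Bitwise XOR of two equal-length byte strings."""
        return bytes(x ^ y for x, y in zip(a, b))

    # Library of two files, each split into two equal-length segments.
    library = {
        "A": [b"A_part_1", b"A_part_2"],
        "B": [b"B_part_1", b"B_part_2"],
    }

    # Placement: user 0 caches segment 0 of every file,
    # user 1 caches segment 1 of every file.
    caches = {
        0: {(name, 0): parts[0] for name, parts in library.items()},
        1: {(name, 1): parts[1] for name, parts in library.items()},
    }

    # Demands: user 0 wants file A (missing segment 1),
    # user 1 wants file B (missing segment 0).
    demands = {0: "A", 1: "B"}

    # Delivery: one coded message A[1] XOR B[0] serves both users,
    # because each can cancel the segment already in its cache.
    coded_msg = xor_bytes(library["A"][1], library["B"][0])

    # Decoding at each user.
    assert xor_bytes(coded_msg, caches[0][("B", 0)]) == library["A"][1]
    assert xor_bytes(coded_msg, caches[1][("A", 1)]) == library["B"][0]

    With uncoded delivery the server would have to send both missing segments separately; the single coded message halves that load, which illustrates the multiplicative gain discussed above.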

    An Efficient Coded Multicasting Scheme Preserving the Multiplicative Caching Gain

    Coded multicasting has been shown to be a promising approach to significantly improve the caching performance of content delivery networks with multiple caches downstream of a common multicast link. However, achievable schemes proposed to date have been shown to achieve the proved order-optimal performance only in the asymptotic regime in which the number of packets per requested item goes to infinity. In this paper, we first extend the asymptotic analysis of the achievable scheme in [1], [2] to the case of heterogeneous cache sizes and demand distributions, providing the best known upper bound on the fundamental limiting performance when the number of packets goes to infinity. We then show that the scheme achieving this upper bound quickly loses its multiplicative caching gain for finite content packetization. To overcome this limitation, we design a novel polynomial-time algorithm based on random greedy graph-coloring that, while keeping the same finite content packetization, recovers a significant part of the multiplicative caching gain. Our results show that the order-optimal coded multicasting schemes proposed to date, while useful in quantifying the fundamental limiting performance, must be properly designed for practical regimes of finite packetization. Comment: 6 pages, 7 figures, Published in Infocom CNTCV 201
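    The random greedy graph-coloring step can be pictured as follows: vertices of a conflict graph stand for (user, missing packet) pairs, an edge marks two packets that cannot share a coded transmission, and each color class becomes one multicast codeword, so fewer colors means a shorter codeword. The sketch below is a generic random greedy coloring under these assumptions, not the paper's actual algorithm, and the toy conflict graph is invented for the example.

    import random

    def random_greedy_coloring(adj):
        """Visit vertices in random order and give each the smallest color
        not already used by a colored neighbor.
        adj: dict mapping vertex -> set of neighboring vertices."""
        order = list(adj)
        random.shuffle(order)
        color = {}
        for v in order:
            used = {color[u] for u in adj[v] if u in color}
            c = 0
            while c in used:
                c += 1
            color[v] = c
        return color

    # Hypothetical conflict graph on (user, missing-packet) pairs.
    conflict = {
        ("u1", "p2"): {("u2", "p3")},
        ("u2", "p3"): {("u1", "p2"), ("u3", "p1")},
        ("u3", "p1"): {("u2", "p3")},
    }

    coloring = random_greedy_coloring(conflict)
    print(coloring, "transmissions:", len(set(coloring.values())))

    Running the greedy pass several times and keeping the coloring with the fewest colors is a common way to exploit the randomness in the visiting order.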

    Integration of Context Information through Probabilistic Ontological Knowledge into Image Classification

    The use of ontological knowledge to improve classification results is a promising line of research. The availability of a probabilistic ontology raises the possibility of combining the probabilities coming from the ontology with the ones produced by a multi-class classifier that detects particular objects in an image. This combination not only provides the relations existing between the different segments, but can also improve the classification accuracy. In fact, it is known that contextual information can often suggest the correct class. This paper proposes a possible model that implements this integration, and the experimental assessment shows the effectiveness of the integration, especially when the classifier's accuracy is relatively low. To assess the performance of the proposed model, we designed and implemented a simulated classifier whose performance can be set a priori with sufficient precision.
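    A minimal way to picture the combination described above is an element-wise product of the classifier's per-class probabilities with contextual probabilities derived from the ontology, followed by renormalization. This is only a sketch of one plausible fusion rule, not necessarily the model proposed in the paper; the class names and numbers are hypothetical.

    import numpy as np

    def fuse_with_context(classifier_probs, context_probs, eps=1e-12):
        """Combine per-class classifier probabilities with contextual
        probabilities (e.g., derived from a probabilistic ontology) by an
        element-wise product followed by renormalization."""
        fused = np.asarray(classifier_probs) * (np.asarray(context_probs) + eps)
        return fused / fused.sum()

    # Hypothetical 3-class example: ("car", "boat", "building").
    # The classifier slightly prefers "boat", but the context (a segment
    # surrounded by road and buildings) makes "boat" unlikely.
    classifier_probs = [0.35, 0.40, 0.25]
    context_probs = [0.60, 0.05, 0.35]

    print(fuse_with_context(classifier_probs, context_probs))
    # The fused distribution now favors "car", as the context suggests.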

    CompNA at SemEval-2021 Task 1: Prediction of lexical complexity analyzing heterogeneous features

    This paper describes the CompNA model that has been submitted to the Lexical Complexity Prediction (LCP) shared task hosted at SemEval 2021 (Task 1). The solution is based on combining heterogeneous features through an ensembling method based on Decision Trees and trained using Gradient Boosting. We discuss the results of the model and highlight the features with the highest predictive capability.
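    A minimal sketch of the kind of pipeline described above, gradient-boosted decision trees trained on heterogeneous numeric features, is given below using scikit-learn; the feature set, hyperparameters, and toy data are assumptions made for the example, not those of the actual CompNA submission.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    # Hypothetical heterogeneous features per target word in context:
    # [word_length, log_corpus_frequency, n_syllables, sentence_length]
    X_train = np.array([
        [5, 9.2, 2, 18],
        [12, 4.1, 4, 25],
        [7, 7.8, 3, 12],
        [14, 2.3, 5, 30],
    ])
    y_train = np.array([0.10, 0.55, 0.20, 0.80])  # gold complexity in [0, 1]

    model = GradientBoostingRegressor(n_estimators=200, learning_rate=0.05,
                                      max_depth=3)
    model.fit(X_train, y_train)

    X_new = np.array([[10, 5.0, 3, 22]])
    print(model.predict(X_new))        # predicted complexity score
    print(model.feature_importances_)  # which features matter most

    The feature_importances_ attribute is what makes it straightforward to highlight the most predictive features, as mentioned above.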

    SentNA @ ATE_ABSITA: Sentiment Analysis of Customer Reviews Using Boosted Trees with Lexical and Lexicon-based Features

    This paper describes our submission to the Sentiment Analysis tasks of ATE_ABSITA (Aspect Term Extraction and Aspect-Based Sentiment Analysis). In particular, we focused on Task 3, using an approach that combines word frequencies with lexicon-based polarities and uses Boosted Trees to predict the sentiment score. This approach achieved a competitive error and, thanks to the interpretability of its building blocks, allows us to show which elements are considered when making the prediction. We also participated in Task 1, proposing a hybrid model that combines rule-based and machine learning methodologies in order to exploit the advantages of both. The model proposed for Task 1 is only preliminary.
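    The Task 3 feature combination can be sketched as follows: word-frequency statistics are joined with polarities looked up in a sentiment lexicon, and the resulting vectors would then be fed to a boosted-tree regressor like the one in the previous example. The toy lexicon and feature names below are hypothetical, not those used in the submission.

    from collections import Counter

    # Hypothetical toy lexicon: word -> polarity in [-1, 1].
    POLARITY_LEXICON = {
        "ottimo": 1.0, "buono": 0.5, "lento": -0.5, "pessimo": -1.0,
    }

    def review_features(text: str) -> dict:
        """Combine word-frequency statistics with lexicon-based polarities."""
        tokens = text.lower().split()
        counts = Counter(tokens)
        polarities = [POLARITY_LEXICON[t] for t in tokens if t in POLARITY_LEXICON]
        return {
            "n_tokens": len(tokens),
            "polarity_sum": sum(polarities),
            "polarity_mean": sum(polarities) / len(polarities) if polarities else 0.0,
            "n_positive": sum(p > 0 for p in polarities),
            "n_negative": sum(p < 0 for p in polarities),
            # plus raw counts of the most frequent words
            **{f"count_{w}": c for w, c in counts.most_common(3)},
        }

    print(review_features("prodotto ottimo ma servizio lento"))

    Each feature dictionary would then be vectorized (e.g., with scikit-learn's DictVectorizer) before training the boosted-tree regressor on the sentiment scores.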

    EVALITA Evaluation of NLP and Speech Tools for Italian - December 17th, 2020

    Welcome to EVALITA 2020! EVALITA is the evaluation campaign of Natural Language Processing and Speech Tools for Italian. EVALITA is an initiative of the Italian Association for Computational Linguistics (AILC, http://www.ai-lc.it) and is endorsed by the Italian Association for Artificial Intelligence (AIxIA, http://www.aixia.it) and the Italian Association for Speech Sciences (AISV, http://www.aisv.it).

    Fuzzy clustering of structured data: Some preliminary results

    In recent years, the field of Machine Learning has shown great interest in the processing of structured data, such as sequences, trees and graphs. In this paper an unsupervised recursive learning schema for structured data clustering is introduced. The schema makes it possible to process data organized in graphs for both graph-focused and node-focused applications. The approach uses the Fuzzy C-Means algorithm as a building block. Some experiments are presented to show its performance and to compare it with another approach known in the literature.
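    The Fuzzy C-Means building block mentioned above can be summarized by the following minimal NumPy sketch, which alternates membership and centroid updates; it operates on flat vectors and is only an illustration of the component, not of the full recursive schema for structured data.

    import numpy as np

    def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, seed=0):
        """Minimal Fuzzy C-Means: returns (centers, U) where U[i, k] is the
        membership degree of sample i in cluster k (rows sum to 1)."""
        rng = np.random.default_rng(seed)
        U = rng.random((X.shape[0], n_clusters))
        U /= U.sum(axis=1, keepdims=True)
        for _ in range(n_iter):
            W = U ** m                                    # fuzzified memberships
            centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted centroids
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            U = 1.0 / d ** (2.0 / (m - 1.0))              # membership update
            U /= U.sum(axis=1, keepdims=True)
        return centers, U

    # Toy usage on flat vectors; in the recursive schema these would be the
    # state vectors computed for graph nodes.
    X = np.vstack([np.random.randn(20, 2), np.random.randn(20, 2) + 5.0])
    centers, U = fuzzy_c_means(X, n_clusters=2)
    print(centers)
    print(U[:3])  # soft memberships of the first three samples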

    Play it my way: Learning through play with your visually impaired child

    SIGLE. Available from British Library Document Supply Centre - DSC:OP-95/MISC / BLDSC - British Library Document Supply Centre, GB, United Kingdom